
    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    Manufacturing Barriers to Biologics Competition and Innovation

    As finding breakthrough small-molecule drugs gets harder, drug companies are increasingly turning to “large molecule” biologics. Although biologics represent many of the most promising new therapies for previously intractable diseases, they are extremely expensive. Moreover, the pathway for generic-type competition set up by Congress in 2010 is unlikely to yield significant cost savings. In this Article, we provide a fresh diagnosis of, and prescription for, this major public policy problem. We argue that the key cause is pervasive trade secrecy in the complex area of biologics manufacturing. Under the current regime, this trade secrecy, combined with certain features of FDA regulation, not only creates high barriers to entry of indefinite duration but also undermines efforts to advance fundamental knowledge. In sharp contrast, offering incentives for information disclosure to originator manufacturers would leverage the existing interaction of trade secrecy and the regulatory state in a positive direction. Although trade secrecy, particularly in complex areas like biologics manufacturing, often involves tacit knowledge that is difficult to codify and thus transfer, in this case regulatory requirements that originator manufacturers submit manufacturing details have already codified the relevant tacit knowledge. Incentivizing disclosure of these regulatory submissions would not only spur competition but would also provide a rich source of information upon which additional research, including fundamental research into the science of manufacturing, could build. In addition to providing a fresh diagnosis and prescription in the specific area of biologics, the Article contributes to more general scholarship on trade secrecy and tacit knowledge. Prior scholarship has neglected the extent to which regulation can turn tacit knowledge not only into codified knowledge but into precisely the type of codified knowledge that is most likely to be useful and accurate.
    The Article also draws a link to the literature on adaptive regulation, arguing that greater regulatory flexibility is necessary and that more fundamental knowledge should spur such flexibility. A vastly shortened version of the central argument that manufacturing trade secrecy hampers biosimilar development was published at 348 Science 188 (2015), available online.

    Nudging the FDA

    [Excerpt] The FDA’s regulation of drugs is frequently the subject of policy debate, with arguments falling into two camps. On the one hand, a libertarian view of patients and the health care system holds high the value of consumer choice. Patients should get all the information and the drugs they want; the FDA should do what it can to enforce some basic standards but should otherwise get out of the way. On the other hand, a paternalist view values the FDA’s role as an expert agency standing between patients and a set of potentially dangerous drugs and potentially unscrupulous or at least insufficiently careful drug companies. We lay out here some of the ways the FDA regulates drugs, including some normally left out of the debate, and suggest a middle ground between libertarian and paternalistic approaches focused on correcting information asymmetry and aligning incentives.

    Distributed Governance of Medical AI

    Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work; it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies undergo, like drugs and most medical devices. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk.

    Describing Black-Box Medicine

    Personalized medicine is a touchstone of modern medical science, and is increasingly addressed in the legal literature. In personalized medicine, treatments are chosen and tailored based on the characteristics of the individual patient. However, personalized medicine today is largely limited to those relatively simple relationships that can be explicitly characterized and validated through the scientific process and through clinical trials.

    Risks and Remedies for Artificial Intelligence in Healthcare

    Artificial intelligence (AI) is rapidly entering health care and serving major roles, from automating drudgery and routine tasks in medical practice to managing patients and medical resources. As developers create AI systems to take on these tasks, several risks and challenges emerge, including the risk of injuries to patients from AI system errors, the risk to patient privacy from data acquisition and AI inference, and more. Potential solutions are complex but involve investment in infrastructure for high-quality, representative data; collaborative oversight by both the Food and Drug Administration and other health-care actors; and changes to medical education that will prepare providers for shifting roles in an evolving system.

    Problematic Interactions between AI and Health Privacy

    The interaction of artificial intelligence (“AI”) and health privacy is a two-way street. Both directions are problematic. This Article makes two main points. First, the advent of artificial intelligence weakens the legal protections for health privacy by rendering deidentification less reliable and by inferring health information from unprotected data sources. Second, the legal rules that protect health privacy nonetheless detrimentally impact the development of AI used in the health system by introducing multiple sources of bias: collection and sharing of data by a small set of entities, the process of data collection while following privacy rules, and the use of non-health data to infer health information. The result is an unfortunate anti-synergy: privacy protections are weak and illusory, but rules meant to protect privacy hinder other socially valuable goals. This state of affairs creates biases in health AI, privileges commercial research over academic research, and is ill-suited to either improve health care or protect patients’ privacy. The ongoing dysfunction calls for a new bargain between patients and the health system about the uses of patient data.

    Generic Entry Jujitsu: Innovation and Quality in Drug Manufacturing

    The manufacturing side of the pharmaceutical industry has been neglected in innovation theory and policy, with the unfortunate result of stagnant manufacturing techniques driving major problems for the healthcare system. This innovation failure has roots in ineffective intellectual property incentives and high regulatory hurdles to innovative change. Changes in pure regulation or intellectual property incentives have significant potential to address the innovation deficit, but are not the only possibility for change. A relatively minor regulatory change could harness the powerful dynamics of pioneer/generic competition surrounding generic drug market entry. If pioneer firms were permitted to make label claims committing to specific manufacturing quality standards above those required by regulation, generics would need to match those standards to match the pioneer label and win approval. This would create incentives for both pioneers and generics to improve manufacturing control and quality capabilities, ideally leading to a virtuous manufacturing quality arms race with benefits for both the healthcare system and industry itself.

    Problematic Interactions between AI and Health Privacy

    Nicholson Price, University of Michigan Law School. The interaction of artificial intelligence (AI) and health privacy is a two-way street. Both directions are problematic. This Essay makes two main points. First, the advent of artificial intelligence weakens the legal protections for health privacy by rendering deidentification less reliable and by inferring health information from unprotected data sources. Second, the legal rules that protect health privacy nonetheless detrimentally impact the development of AI used in the health system by introducing multiple sources of bias: collection and sharing of data by a small set of entities, the process of data collection while following privacy rules, and the use of non-health data to infer health information. The result is an unfortunate anti-synergy: privacy protections are weak and illusory, but rules meant to protect privacy hinder other socially valuable goals. This state of affairs creates biases in health AI, privileges commercial research over academic research, and is ill-suited to either improve health care or protect patients. What is deeply needed is a new bargain between patients and the health system about the uses of patient data.